Sequence Encoders Enable Large-Scale Lexical Modeling: Reply to Bowers and Davis (2009)

Authors

  • Daragh E. Sibley
  • Christopher T. Kello
  • David C. Plaut
  • Jeffrey L. Elman
Abstract

Sibley, Kello, Plaut, and Elman (2008) proposed the sequence encoder as a model that learns fixed-width distributed representations of variable-length sequences. In doing so, the sequence encoder overcomes problems that have restricted models of word reading and recognition to processing only monosyllabic words. Bowers and Davis (in press) recently claimed that the sequence encoder does not actually overcome the relevant problems, and hence is not a useful component of large-scale word reading models. In this reply, it is noted that the sequence encoder has facilitated the creation of large-scale word reading models. The reasons for this success are explained, and stand as counterarguments to claims made by Bowers and Davis.


Similar Articles

Postscript: Some Final Thoughts on Grandmother Cells, Distributed Representations, and PDP Models of Cognition

Sciences, 26, 610–611. Hubel, D. (1995). Eye, brain, and vision. New York, NY: Scientific American Library. Hubel, D. H., & Wiesel, T. N. (1968). Receptive fields and functional architecture of monkey striate cortex. Journal of Physiology: London, 195, 215–243. Keysers, C., Xiao, D. K., Foldiak, P., & Perrett, D. I. (2001). The speed of sight. Journal of Cognitive Neuroscience, 13, 90–101. McCl...


Empirical and computational support for context-dependent representations of serial order: reply to Bowers, Damian, and Davis (2009).

J. S. Bowers, M. F. Damian, and C. J. Davis critiqued the computational model of serial order memory put forth in M. Botvinick and D. C. Plaut, purporting to show that the model does not generalize in a way that people do. They attributed this supposed failure to the model's dependence on context-dependent representations, translating this argument into a general critique of all parallel distri...


Learning Orthographic and Phonological Representations in Models of Monosyllabic and Bisyllabic Naming

Most current models of word naming are restricted to processing monosyllabic words and pseudowords. This limitation stems from difficulties in representing the orthographic and phonological codes for words varying substantially in length. Sibley, Kello, Plaut, & Elman (2008) described an extension of the simple recurrent network architecture, called the sequence encoder, that learned orthograph...


Lexical Bundles in English Abstracts of Research Articles Written by Iranian Scholars: Examples from Humanities

This paper investigates a special type of recurrent expressions, lexical bundles, defined as a sequence of three or more words that co-occur frequently in a particular register (Biber et al., 1999). Considering the importance of this group of multi-word sequences in academic prose, this study explores the forms and syntactic structures of three- and four-word bundles in English abstracts writte...


Deep encoding of etymological information in TEI

In this paper we provide a systematic and comprehensive set of modeling principles for representing etymological data in digital dictionaries using TEI. The purpose is to integrate in one coherent framework both digital representations of legacy dictionaries and born-digital lexical databases that are constructed manually or semi-automatically. We provide examples from many different types of e...




Journal:
  • Cognitive Science

Volume 33, Issue 7

Pages: -

Publication year: 2009